Smart cache (RAM context cache) #1851
base: concedo_experimental
Conversation
"Hi @LostRuins , I've opened this draft PR as a functional proof-of-concept for the Smart Cache feature. As we discussed, I'd really appreciate your feedback and any help you can offer to refine it before I mark it as ready for a formal review. Thank you!" |
…maxsize (max RAM size) - not ready yet
@LostRuins this PR is ready to be reviewed. While implementing the Smart Cache feature, I noticed the `//No dynamic memory allocation! Setup structs with FIXED (known) shapes and sizes` guideline. My implementation has a few deviations from these strict rules.

Current Choices:

These choices favor:

vs. strict compliance, which would require:

Question: Are these pragmatic deviations acceptable, or would you prefer
…-allocate buffers, cache context params
… Remove Redundant max_savestate_slots
Thanks @wbruna! All three points addressed:
Changes pushed. Ready for re-review!
- Add slot pooling to reuse C++ buffers (prevents memory leak)
- Skip RAM search for prompts <2048 tokens
- Remove misleading is_active flag from stats API
- Invalidate slots on eviction to enable pool reuse
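A minimal sketch of the slot-pooling and invalidation idea described above. The structure and field names are illustrative only, not the actual C++ buffers in this PR:

```python
# Illustrative sketch of slot pooling; names and sizes are made up.
MIN_PROMPT_TOKENS = 2048   # prompts shorter than this skip the RAM search entirely

class SlotPool:
    def __init__(self, max_slots: int):
        # Fixed set of slots so the underlying buffers can be reused instead of reallocated.
        self.slots = [{"buffer": bytearray(), "tokens": None, "valid": False}
                      for _ in range(max_slots)]

    def acquire(self):
        # Hand out an invalidated slot (and its existing buffer) before allocating anything new.
        for slot in self.slots:
            if not slot["valid"]:
                slot["valid"] = True
                return slot
        return None   # pool exhausted: caller must evict a slot first

    def evict(self, slot):
        # Invalidate instead of freeing, so the buffer stays available for pool reuse.
        slot["valid"] = False
        slot["tokens"] = None
```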
If in the future this ends up being a half-useful feature rather than a fully useful one, due to potentially needing a LOT more sysRAM, I suppose one option is to specify a maximum context size the user may send over before it gets either truncated before generating the ctx K-V database, or entirely skipped, with the option of reporting this back to the user. My phrasing isn't exactly the most eloquent, but this is what I can muster for linguistics at this time. The assertion of half/full usefulness isn't a dig at you, but rather about what the user would find as a barrier to entry/usage.

In any case, with all the newer storage methods and RAM speeds around lately, the need to actually copy it back to VRAM might be almost redundant, depending on the latency introduced by the process of loading it into VRAM. System RAM is blazing fast, and I can attest that in many situations I have preferred fully manually offloading the KV cache to sysRAM so I can load all the layers on the GPU, because the lack of fragmentation is much more effective. In my case it's a bit slow, albeit still faster than fragmentation across a GTX 1070 & i7-7700K. But that just stands to prove that there is most definitely room for "play" and nonstandard ways of holding onto that KV cache.

As to how the faster storage I mentioned comes into play here: some NVMe storage devices come scarily close to the total round-trip time you'd find for RAM access, given the possibly easier ability to access NVMe directly over the bus. That's, in essence, no different from the concept of directly attaching your storage to your PCIe bus for that gaming use case; it makes use of the exact justifications and reasons I'm bringing up now. That said, I'm not knowledgeable enough to have a firm grasp on what the latency figures for all of this actually turn out to be. I'm merely keeping in mind that there are those extra layers to traverse in the end. That's especially true for Python, even if CPython is really impressive these days.

I'm following this; I'm curious what we end up with for Christmas this year!
Consumer gaming PCs have 32 GB of RAM, sometimes 64 GB (like I do). If you don't use mmap to load the model, most of it is free, unless you run MoE experts on the CPU or something like that. It's with huge context that this feature truly shines. I'm testing it with a 48 GB smart cache, with both an AIHorde worker (around 9% cache hit rate over 36h), waidrin (https://github.com/p-e-w/waidrin) and SillyTavern. I can really notice the difference.

About the "speed" of moving data RAM <-> VRAM: even with DDR3 RAM you would still get better speeds than having to preprocess thousands of tokens in case of a cache hit.

About the storage hierarchy and NVMe speeds: you're absolutely right, modern NVMe (especially Gen4/5) has dramatically narrowed the gap to DRAM latency. The challenge with the KV cache specifically is the frequency of access during generation (every token decode touches it), so even 50-100µs of NVMe latency vs ~100ns for RAM adds up fast. That said, for hibernated slots that aren't actively generating, NVMe could be brilliant: a kind of tiered cache (VRAM → RAM → NVMe). The current implementation keeps everything in RAM because we're using llama.cpp's save_state_kv(), which serializes to a memory buffer. Extending this to NVMe would need a custom serialization path that bypasses the buffer, but it's technically doable.

About direct VRAM offload vs fragmentation: your GTX 1070 experience is a perfect example; sometimes predictable RAM latency beats fragmented VRAM/split execution. The smart cache sits in a sweet spot for that: it keeps the working set in VRAM while letting you maintain a much larger "recently used" pool in RAM without OOM-ing the GPU.
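Not part of this PR, but a rough sketch of what spilling a hibernated slot to an NVMe tier could look like, assuming the backend's state save has already produced an in-memory byte buffer. The directory name and helper names are hypothetical:

```python
import os
import time

SPILL_DIR = "kv_spill"   # hypothetical spill directory on an NVMe drive

def hibernate_slot(slot_id: int, state_bytes: bytes) -> None:
    """Write an already-serialized KV state to disk so its RAM can be reclaimed."""
    os.makedirs(SPILL_DIR, exist_ok=True)
    with open(os.path.join(SPILL_DIR, f"slot_{slot_id}.bin"), "wb") as f:
        f.write(state_bytes)

def restore_slot(slot_id: int) -> bytes:
    """Read a hibernated KV state back into RAM before handing it to the backend."""
    path = os.path.join(SPILL_DIR, f"slot_{slot_id}.bin")
    with open(path, "rb") as f:
        data = f.read()
    now = time.time()
    os.utime(path, (now, now))   # touch the file so on-disk eviction can be LRU-by-mtime
    return data
```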
Hi @Pento95, thanks for your PR. Sorry I didn't get to it earlier, been busy. I took a look at this, and that's a massive amount of code you're adding 😅. I do like the idea, but (and no offense) I don't really like this implementation, especially the changes to koboldcpp.py. Ideally koboldcpp.py should not be managing cached context contents at all; so far that has always been handled transparently by the backend (like how context shifting or fast forward is completely transparent to the Python side, which simply sends prompts). I think the segment about automatic LRU eviction based on RAM and variable slot counts is also not necessary, mainly due to the high complexity. We actually already have functionality we can hook into for loading and saving states, so the whole side-slot system is a little redundant; the goal is just to automate it (it's currently triggered manually).

Okay... so let's take a step back. Your core idea is solid. Goal: allow automatic saving and loading of KV states to reduce reprocessing in cases where we juggle between a small set of repeated prompts (like AI Horde usage). It doesn't actually have much to do with VRAM; we're just reusing the KV state from a previous prompt.

Suggested simple approach:

This should be way simpler and only requires adding 3 new functions, of which 2 are just helper functions for existing code. It does not even need to touch koboldcpp.py except for adding one flag+checkbox that enables/disables smart cache. And it should work as you intended. Thoughts?
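As a rough illustration of the kind of matching an automated save/load could use, the sketch below picks the saved state whose token prefix best overlaps the incoming prompt. The helper names and the `(state_id, tokens)` structure are illustrative, not the actual functions proposed here:

```python
def prefix_match_len(a, b):
    """Number of leading tokens shared by two token lists."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def pick_reusable_state(prompt_tokens, saved_states, threshold=0.8):
    """saved_states: list of (state_id, token_list) pairs previously stashed
    via the backend's existing save-state hook (placeholder structure)."""
    best_id, best_len = None, 0
    for state_id, tokens in saved_states:
        m = prefix_match_len(tokens, prompt_tokens)
        if m > best_len:
            best_id, best_len = state_id, m
    if prompt_tokens and best_len / len(prompt_tokens) >= threshold:
        return best_id, best_len   # load this state; only the unmatched tail needs processing
    return None, 0                 # miss: save the outgoing context, reprocess fully
```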
Just my $0.02 on a specific point:
At least customizing the number of smart cache slots could be allowed without additional complexity: the algorithm would work pretty much the same, and it could still be controlled by a single config (<=0 to disable, >=1 for the number of available slots).
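For illustration only, that single-knob semantics could look like this (the function name is hypothetical, not something in koboldcpp today):

```python
# Hypothetical single knob: <= 0 disables the smart cache, >= 1 sets the slot count.
def smartcache_slot_count(config_value: int) -> int:
    return 0 if config_value <= 0 else config_value

assert smartcache_slot_count(-1) == 0   # disabled
assert smartcache_slot_count(0) == 0    # disabled
assert smartcache_slot_count(4) == 4    # four slots available
```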
Closes #1827
The Problem
As described in issue #1827, frequent context switching (e.g., in multi-user scenarios like AI Horde) causes significant latency. This occurs because the KV cache in VRAM must be discarded and re-calculated from scratch for each new, unrelated prompt, wasting processing time.
The Solution
A Multi-Slot RAM KV Cache system uses system RAM to save and restore KV cache snapshots, inspired by llama.cpp's `server_prompt_cache`. On a cache hit, the restored snapshot is reused via `ContextFastForward`. This approach drastically reduces latency during context switches, improving efficiency and response speed in multi-user scenarios.
Architecture: Two-Level Cache System
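A minimal sketch of how the RAM level of such a two-level cache could be organized (level 1 being the live KV cache in VRAM, managed by the backend). Class and field names are illustrative, not the actual implementation; eviction here is a simple LRU bounded by a total byte budget, mirroring the `--smartcacherammaxsize` idea:

```python
from collections import OrderedDict

class RamSnapshotCache:
    """Level 2 of the sketch: serialized KV snapshots held in system RAM."""

    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes      # budget, e.g. --smartcacherammaxsize converted to bytes
        self.used = 0
        self.slots = OrderedDict()      # insertion order doubles as LRU order

    def store(self, key, snapshot: bytes):
        if key in self.slots:           # replacing an existing slot: release its size first
            self.used -= len(self.slots.pop(key))
        self.slots[key] = snapshot
        self.used += len(snapshot)
        while self.used > self.max_bytes and self.slots:
            _, evicted = self.slots.popitem(last=False)   # evict least recently used
            self.used -= len(evicted)

    def fetch(self, key):
        snap = self.slots.pop(key, None)
        if snap is not None:
            self.slots[key] = snap      # re-insert to mark as most recently used
        return snap
```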
Key Features
- `--smartcache`, `--smartcacherammaxsize`, and `--smartcachethreshold` flags for command-line configuration.
- `/api/extra/stats/smartcache` endpoint to monitor cache performance and statistics (hit rate, misses, etc.).

How to use
Smart Cache Two-Level System Commands:
--smartcache
Enable the smart cache two-level system for intelligent context switching (default: disabled).
--smartcacherammaxsize [GB]
Maximum RAM size in GB for smart cache slots (default: 10 GB). Smart cache keeps creating slots until this RAM limit is reached. Cannot exceed 90% of total system RAM.
--smartcachethreshold [threshold]
Similarity threshold (0.0-1.0) for cache reuse. Values >= threshold use ContextFastForward; values < threshold trigger a context switch with a RAM search. (default: 0.8)
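For example, with the server launched with `--smartcache`, the new stats endpoint can be polled to watch cache effectiveness. The endpoint path comes from this PR; the default port and the exact JSON field names are assumptions:

```python
import json
from urllib.request import urlopen

# Default koboldcpp port assumed; adjust if you launched with a different --port.
URL = "http://localhost:5001/api/extra/stats/smartcache"

with urlopen(URL) as resp:
    stats = json.load(resp)

print(json.dumps(stats, indent=2))   # expected to include hit/miss counts, hit rate, RAM usage
```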